Librarians have been actively collaborating and discussing it almost every day, whether creating tutorials and digital learning objects or thinking through the conversations to have with instructors. It can feel like cognitive dissonance to be working with AI regularly while also saying we're constantly thinking about its harms and biases.
We really believe that creative control should always stay with the game creators and the game development team, and the AI features we are exploring are there to support that team's vision. Ultimately, we want to bring AI that helps broaden a game's reach, deepen engagement, and keep players coming back, to many more games across the catalog.
Kaplan says that he does see AI as something that could potentially help with some of the more mundane logistical sides of game development, but he feels that the technology and its peddlers are "overconfident" in what it offers. He tells a story of how he used ChatGPT to try to solve a UI problem, as that isn't his area of expertise, and the bot "overconfidently" gave him the wrong answer.
For every project that needs guardrails, there's another one where they just get in the way. Some projects demand an LLM that returns the complete, unvarnished truth. For these situations, developers are creating unfettered LLMs that can interact without reservation. Some of these solutions are based on entirely new models while others remove or reduce the guardrails built into popular open source LLMs.
In an internal memo that Sam Altman sent last Thursday night, which leaked widely and a copy of which I obtained, the OpenAI CEO said that he would seek "red lines" to prevent the Pentagon from using OpenAI products for mass domestic surveillance and autonomous lethal weapons. These were ostensibly the very same limits that Anthropic had demanded and that had infuriated the Pentagon, leading Defense Secretary Pete Hegseth to declare the company a supply-chain risk.
On Saturday, uninstalls of the ChatGPT mobile app skyrocketed by 295 percent from the day before, according to market intelligence provider Sensor Tower. As TC noted, that's a significant leap compared to the AI chatbot's typical day-over-day uninstall rate of nine percent over the past 30 days.
We've started to notice all these things in Meta advertising, where the majority of our marketing spend is. Things like your text being used to train AI, and more and more AI features you have to opt out of, such as AI pictures and AI videos that can alter the image you've uploaded quite dramatically. You have to opt out of each one, individually, every time you post something.
Rivera credits the creation of courses focusing on the intersection of AI and humanities with a resurgence in student interest in liberal arts degrees like English. Pre-pandemic, the number of English majors at the university was shrinking, part of a broader decline in English across the country, he said. It was a far cry from the days of over 1,500 majors and long waitlists in the early 2000s, according to Rivera. But there's been a rebound, with the number of English majors rising 9% since 2021.
We are living through one of the most disorienting periods in recorded history. The AI race is accelerating toward ever faster, ever more sophisticated automation and optimization. Agentic AI systems are moving from research labs into workplaces, healthcare, and governance. Geopolitical tensions are restructuring alliances faster than institutions can adapt. And planetary systems are signaling, with increasing urgency, that our current trajectory is unsustainable. Amid all this, it is dangerously easy to lose sight of a foundational question: What are we actually optimizing for?